GAT–GMM: Generative Adversarial Training for Gaussian Mixture Models
Authors
Abstract
Generative adversarial networks (GANs) learn the distribution of observed samples through a zero-sum game between two machine players, a generator and a discriminator. While GANs achieve great success in learning complex image, sound, and text data, they perform suboptimally on multimodal distribution-learning benchmarks such as Gaussian mixture models (GMMs). In this paper, we propose Generative Adversarial Training for Gaussian Mixture Models (GAT-GMM), a minimax GAN framework for learning GMMs. Motivated by optimal transport theory, we design GAT-GMM using a random linear generator and a softmax-based quadratic discriminator architecture, which leads to a nonconvex-concave minimax optimization problem. We show that a gradient descent ascent (GDA) method converges to an approximate stationary point of this problem and, in the benchmark case of a mixture of two symmetric, well-separated Gaussians, further recovers the true parameters of the underlying GMM. We also discuss the application of the proposed framework to learning GMMs in a distributed federated setting, where the widely used expectation-maximization (EM) algorithm can incur significant computational and communication costs; GAT-GMM, on the other hand, provides a scalable approach in which GDA can still solve the problem without incurring extra computation. We numerically support our theoretical results by performing experiments showing that GAT-GMM is successful in centralized learning tasks and can outperform standard EM-type algorithms in the federated setting.
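To illustrate the kind of minimax training the abstract describes, the following is a minimal sketch, not the paper's exact architecture: a generator parameterized by a mean vector `mu` produces samples from a symmetric two-component mixture, and a quadratic discriminator D(x) = xᵀWx (a simplification of the softmax-based quadratic discriminator) compares second moments. Both players take simultaneous gradient descent ascent steps. The regularization weight `lam`, step sizes, and the assumption of a known unit noise covariance are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth symmetric two-component GMM: x = s * mu_true + noise, s = +/-1.
d = 2
mu_true = np.array([2.0, 1.0])
n = 20000
signs = rng.choice([-1.0, 1.0], size=n)
X = signs[:, None] * mu_true + rng.standard_normal((n, d))

C_real = X.T @ X / n  # empirical second-moment matrix of the real data

# Simplified minimax objective:
#   min_mu max_W  <W, C_real> - <W, mu mu^T + I> - lam * ||W||_F^2
# The quadratic discriminator D(x) = x^T W x distinguishes the mixtures
# through their second moments; the generator's unit noise covariance is
# assumed known, so its fake second moment is mu mu^T + I in closed form.
lam, eta_d, eta_g = 1.0, 0.25, 0.02
W = np.zeros((d, d))
mu = np.array([0.5, 0.5])  # generator's mean estimate

for _ in range(3000):
    C_fake = np.outer(mu, mu) + np.eye(d)
    grad_W = C_real - C_fake - 2.0 * lam * W  # ascent direction for D
    grad_mu = -(W + W.T) @ mu                 # gradient of objective w.r.t. mu
    W = W + eta_d * grad_W                    # discriminator ascent step
    mu = mu - eta_g * grad_mu                 # generator descent step

# The mean is identifiable only up to sign in a symmetric mixture.
err = min(np.linalg.norm(mu - mu_true), np.linalg.norm(mu + mu_true))
print(f"recovered mean (up to sign): {mu}, error {err:.3f}")
```

In this simplified setting the inner maximization is strongly concave thanks to the `-lam * ||W||²` regularizer, so GDA drives `W` toward (C_real - C_fake)/(2·lam) and the generator's mean toward ±mu_true, mirroring the nonconvex-concave structure and parameter-recovery claim described above.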
Similar references
Selective Sampling and Mixture Models in Generative Adversarial Networks
In this paper, we propose a multi-generator extension to the adversarial training framework, in which the objective of each generator is to represent a unique component of a target mixture distribution. In the training phase, the generators cooperate to represent, as a mixture, the target distribution while maintaining distinct manifolds. As opposed to traditional generative models, inference f...
Adversarial examples for generative models
We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model...
MGAN: Training Generative Adversarial Nets
We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing probl...
Generative Adversarial Networks as Variational Training of Energy Based Models
In this paper, we study deep generative models for effective unsupervised learning. We propose VGAN, which works by minimizing a variational lower bound of the negative log likelihood (NLL) of an energy based model (EBM), where the model density p(x) is approximated by a variational distribution q(x) that is easy to sample from. The training of VGAN takes a two step procedure: given p(x), q(x) ...
Journal
Journal title: SIAM Journal on Mathematics of Data Science
Year: 2023
ISSN: 2577-0187
DOI: https://doi.org/10.1137/21m1445831